Artificial intelligence explainability
MIT Sloan research on artificial intelligence and machine learning
There's little question that artificial intelligence and machine learning are playing an increasing role in business decision-making. A 2022 survey of senior data and technology executives by NewVantage Partners found that 92% of large companies reported achieving returns on their data and AI investments -- up from 48% in 2017. But as these technologies enter the mainstream, new questions arise: How will they change the nature of workflow and workplace connection? Will they be harnessed ethically? Here's what to consider as AI and machine learning become omnipresent, according to MIT Sloan researchers, visiting scholars, and industry experts.
Why companies need artificial intelligence explainability
Because artificial intelligence is still relatively new, there isn't an extensive list of proven use cases, and leaders are often uncertain whether and how their companies will see returns on AI programs. These programs also need to be integrated into an organization, and stakeholders -- particularly employees and customers -- need to trust that an AI program is accurate and reliable. This makes the case for building enterprisewide artificial intelligence explainability, according to a new research briefing by Ida Someh, Barbara Wixom, and Cynthia Beath of the MIT Center for Information Systems Research. The researchers define artificial intelligence explainability as "the ability to manage AI initiatives in ways that ensure models are value-generating, compliant, representative, and reliable."